Networks can link nodes by associations {association graph}, with or without scaling.
Networks can have only one or two inputs to each node {sparsely connected}. Networks can have four or more inputs to each node {densely connected} [Kanerva, 1988] {connectedness}.
Networks {feedforward network} can have interconnected units that adjust connection strengths to store processes or representations. Input-layer nodes receive intensities. Middle-layer nodes connect to all input nodes and to all output nodes. Nodes at same level do not interact. Output layer indicates answers by output pattern at nodes. Output from hidden units is a nonlinear function, such as the logistic function, of weighted input.
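A minimal sketch of one such feedforward pass, assuming one hidden layer, logistic units, and made-up example weights (the function names are illustrative, not from the source):

import math

def logistic(s):
    # squashing nonlinearity applied by hidden and output units
    return 1.0 / (1.0 + math.exp(-s))

def forward(inputs, w_hidden, w_output):
    # each hidden unit weights every input-layer intensity
    hidden = [logistic(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    # each output unit weights every hidden-unit signal
    return [logistic(sum(w * h for w, h in zip(row, hidden)))
            for row in w_output]

# 3 input units -> 2 hidden units -> 1 output unit, arbitrary example weights
print(forward([0.2, 0.9, 0.1],
              [[0.5, -0.3, 0.8], [0.1, 0.4, -0.2]],
              [[1.0, -1.0]]))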
Neuron models {McCulloch-Pitts neuron} can use linear threshold logic units. The network strengthens a synapse if its input fails to fire the neuron when expected, or weakens a synapse if its input fires the neuron when not expected. Networks can use McCulloch-Pitts neurons to store representations, match input to representations, and send output.
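A minimal sketch of a linear threshold logic unit with the error-driven synapse adjustment described above (threshold, learning rate, and example values are illustrative assumptions):

def threshold_unit(inputs, weights, threshold):
    # McCulloch-Pitts style unit: fire (1) if weighted input reaches threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def adjust(weights, inputs, fired, expected, rate=0.1):
    # strengthen synapses if the unit failed to fire when expected,
    # weaken them if it fired when not expected
    if fired < expected:
        return [w + rate * x for w, x in zip(weights, inputs)]
    if fired > expected:
        return [w - rate * x for w, x in zip(weights, inputs)]
    return weights

weights = [0.2, 0.2]
out = threshold_unit([1, 1], weights, threshold=0.5)
weights = adjust(weights, [1, 1], out, expected=1)
print(out, weights)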
Networks {pandemonium network} {contention scheduling network} {winner-take-all network} depend on competition among processes {demon}, until only one process is still active. Representations try to inhibit all other representations. The strongest inhibits all others, and program selects it.
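A minimal sketch of winner-take-all competition by mutual inhibition (activity values, inhibition strength, and step count are illustrative assumptions):

def winner_take_all(activities, inhibition=0.2, steps=50):
    # each representation (demon) inhibits all others until one stays active
    a = list(activities)
    for _ in range(steps):
        a = [max(0.0, x - inhibition * (sum(a) - x)) for x in a]
    return a.index(max(a)), a

# the strongest initial activity (index 0) suppresses the rest and is selected
print(winner_take_all([0.9, 0.7, 0.4]))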
Network input-output devices {Perceptron} can alter connections or connection strengths by adjusting weights based on input, using feedback that output was correct or incorrect compared to ideal output {Perceptron learning rule}. Ideal output is one pattern or linearly separable patterns. In initial learning period, Perceptrons adjust weights. In later test period, Perceptrons send more or less correct output for inputs.
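A minimal sketch of the Perceptron learning rule on a linearly separable pattern (logical AND); the learning rate, epoch count, and bias term are illustrative assumptions:

def train_perceptron(samples, epochs=20, rate=0.1):
    # samples: list of (input vector, desired output 0 or 1)
    n = len(samples[0][0])
    w, bias = [0.0] * n, 0.0
    for _ in range(epochs):                      # initial learning period
        for x, desired in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = desired - out                  # feedback: correct or incorrect
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            bias += rate * err
    return w, bias

# learn logical AND, which is linearly separable
w, b = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
print(w, b)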
Systems {random graph} can have nodes and edges with no organization and no order.
clustering
If edge number is smaller than half node number, so edge-to-node ratio is less than 0.5, few nodes cluster, largest cluster is small, and most nodes do not connect to other nodes. Connection growth rate is greatest as edge-to-node ratio increases from 0.5 to 0.6. If edge-to-node ratio equals 0.6, system has phase transition: most nodes cluster, largest cluster is big, and most nodes connect to other nodes. After that, growth slows, because most nodes already have connections.
node number
If node number is small, phase transition spreads over a wider range of edge-to-node ratios. If node number is large, phase transition spreads over a narrower range of ratios and is sharper.
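A minimal simulation sketch of this clustering behavior (node count, seed, and the union-find bookkeeping are illustrative assumptions):

import random

def largest_cluster_fraction(n_nodes, n_edges, seed=0):
    # build a random graph and report the fraction of nodes in the largest cluster
    random.seed(seed)
    parent = list(range(n_nodes))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _ in range(n_edges):
        a, b = random.randrange(n_nodes), random.randrange(n_nodes)
        parent[find(a)] = find(b)
    sizes = {}
    for i in range(n_nodes):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n_nodes

# largest-cluster fraction grows fastest as the edge-to-node ratio passes 0.5-0.6
for ratio in (0.3, 0.5, 0.6, 0.8, 1.0):
    print(ratio, largest_cluster_fraction(10000, int(ratio * 10000)))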
Graphs {reaction graph} can have nodes that are compounds, such as polymers, and connectors that are reactions.
nodes
Polymers have different lengths. Polymers are products of reactant polymers. Compounds can supply energy to make polymers. Required compounds are polymer units and do not have input reaction paths. Some compounds are never reactants and have no output reaction paths.
connectors
Connectors lead from reactant nodes to product nodes {reaction path}. Reactions can lengthen or shorten polymers. Lengthening polymers requires energy. Shortening polymers releases energy.
process
If existing polymers can make more-complex polymers, number of compound nodes increases and number of reactions increases even more. If many different reactions make and break polymers, reaction number grows exponentially with maximum polymer length, so reaction-to-compound ratio keeps increasing, as the sketch below illustrates.
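A minimal counting sketch for binary polymers up to a maximum length; the two-letter alphabet and the convention of one lengthening/shortening reaction per internal bond are assumptions:

def counts(alphabet_size, max_length):
    # compounds: all polymers up to max_length over the alphabet
    compounds = sum(alphabet_size ** n for n in range(1, max_length + 1))
    # reactions: each polymer of length n can be made or cleaved at n - 1 bonds
    reactions = sum((n - 1) * alphabet_size ** n for n in range(1, max_length + 1))
    return compounds, reactions

# reaction number grows exponentially; the reaction-to-compound ratio keeps rising
for max_length in range(2, 11):
    c, r = counts(2, max_length)
    print(max_length, c, r, round(r / c, 2))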
catalyst
Reactions can have input catalysts that increase reaction rate but that the reaction does not consume. Reaction-graph subsets {catalyzed reaction subgraph} can have only catalyzed reactions.
autocatalytic
Systems {autocatalytic system, reaction graph} can have reactions whose products are also reactants, so products increase rates of the concentration-dependent reactions that make them. Autocatalytic systems consume original reactants quickly and end quickly. Catalyzed reaction subgraphs {autocatalyzed subset} are self-catalyzing if all compounds are either food or products of catalyzed reactions.
If existing polymers can make more-complex polymers, a small percentage of new polymers can be catalysts. When catalyzed-reaction number becomes greater than polymer number, system has phase transition to autocatalytic system.
If autocatalytic systems have food and energy compounds, number of different polymers can increase. If system can double, probably requiring specialized catalysts and molecules, it has reproduced itself {self-reproducing}.
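A minimal sketch of the transition to autocatalysis, assuming each polymer catalyzes each reaction independently with a small probability; the compound and reaction counts reuse the counting sketch above for maximum lengths 2, 5, and 10:

def catalyzed_reactions(compounds, reactions, p_catalysis):
    # expected number of reactions catalyzed by at least one existing polymer
    return reactions * (1 - (1 - p_catalysis) ** compounds)

# as polymers lengthen, catalyzed reactions eventually outnumber polymers,
# which is the claimed phase transition to an autocatalytic system
for compounds, reactions in ((6, 4), (62, 196), (2046, 16388)):
    print(compounds, reactions,
          round(catalyzed_reactions(compounds, reactions, 1e-3), 1))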
critical
Reaction graphs can have chain reactions {criticality, reaction}. Chain reactions can make more of the same molecules {subcriticality}, so molecule number increases exponentially. Chain reactions can make new molecule types {supracriticality}, so chain reactions make exponentially more types. If different-object-type number increases, supracritical behavior increases. If reaction-catalysis probability increases, supracritical behavior increases.
Networks {transition network} can represent objects as nodes and conjunctions between objects as arcs.
purposes
Transition networks can model binary object conjunctions by AND operations. Transition networks can represent processes. Transition networks do not model object quantities. Transition networks do not model disjunctions, such as inclusive OR.
transition
System states have specific nodes and arcs. Transition from one state to another state has probability. Transitions have directions. There can be final state. Transitions can depend on time, properties, and previous transitions.
comparison
Transition networks relate to algorithms and grammars.
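A minimal sketch of a transition network with probabilistic, directed transitions and a final state (the state names and probabilities are invented for illustration):

import random

transitions = {
    "start":   [("working", 0.8), ("start", 0.2)],
    "working": [("done", 0.5), ("working", 0.5)],
    "done":    [],                      # final state: no outgoing arcs
}

def run(state="start", seed=1):
    random.seed(seed)
    path = [state]
    while transitions[state]:
        r, total = random.random(), 0.0
        for nxt, p in transitions[state]:
            total += p
            if r <= total:
                state = nxt
                break
        path.append(state)
    return path

print(run())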
Networks {Boolean network} can have nodes that are either on (1) or off (0). Inputs from other nodes determine node state. Boolean rules can make values 0 and 1 equally likely, make value 0 certain, make value 1 certain, or make any probability. If average Boolean rule makes value 0 or 1 almost certain, system is stable. If average Boolean rule makes 0 and 1 equally likely, system is unstable. At one probability, system switches rapidly from order to chaos.
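A rough simulation sketch of a random Boolean network, assuming three inputs per node and rules biased toward 1 with probability p_one (node count, steps, and seed are illustrative):

import random

def random_boolean_network(n_nodes, n_inputs, p_one, steps, seed=0):
    # each node reads n_inputs random nodes through a random Boolean rule
    random.seed(seed)
    inputs = [[random.randrange(n_nodes) for _ in range(n_inputs)]
              for _ in range(n_nodes)]
    rules = [[1 if random.random() < p_one else 0 for _ in range(2 ** n_inputs)]
             for _ in range(n_nodes)]
    state = [random.randrange(2) for _ in range(n_nodes)]
    for _ in range(steps):
        state = [rules[i][sum(state[j] << k for k, j in enumerate(inputs[i]))]
                 for i in range(n_nodes)]
    return state

# strongly biased rules (p_one near 1) tend to freeze into an ordered state;
# unbiased rules (p_one near 0.5) tend to keep changing chaotically
print(sum(random_boolean_network(200, 3, 0.9, 50)))
print(sum(random_boolean_network(200, 3, 0.5, 50)))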
Boolean networks {canalyzing Boolean function} can have functions in which one input value by itself determines node output. For example, OR has two inputs and is canalyzing, because if either input is 1, output is 1. EXCLUSIVE OR is not canalyzing, because output depends on both inputs. If input number is two, 14 of 16 possible Boolean functions are canalyzing. EXCLUSIVE OR and IF AND ONLY IF are non-canalyzing. For Boolean functions with more than two inputs, few are canalyzing. Canalyzing functions have fewer interactions and so are simpler.
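A minimal sketch that enumerates all 16 two-input Boolean functions and checks the canalyzing property (the truth-table representation is an implementation choice):

from itertools import product

def is_canalyzing(truth_table, n_inputs):
    # canalyzing: some input, at one of its values, fixes the output by itself
    rows = list(product((0, 1), repeat=n_inputs))
    for i in range(n_inputs):
        for value in (0, 1):
            outputs = {truth_table[r] for r in rows if r[i] == value}
            if len(outputs) == 1:
                return True
    return False

rows = list(product((0, 1), repeat=2))
count = sum(is_canalyzing(dict(zip(rows, bits)), 2)
            for bits in product((0, 1), repeat=4))
print(count)   # 14: only EXCLUSIVE OR and IF AND ONLY IF are non-canalyzing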
Interconnected units {neural network}| can adjust connection strengths to model processes or representations.
Each input-layer unit sends its signal intensity to all middle-layer units, which weight each input.
Each middle-layer unit sends its signal intensity to all output-layer units, which weight each input.
The system can use feedback [Hinton, 1992], feed-forward, and/or human intervention to adjust weights (connection strengths).
To calculate adjusted weight W' using feedback, subtract constant C times partial derivative D of error function e from original weight W: W' = W - C * D(e). The program or programmer can calculate constant C and error function e.
Alternatively, to calculate adjusted weight W' using feedback, add original weight W and constant C times the difference of the estimated true amount t and the current amount c: W' = W + C * (t - c). The program or programmer can calculate constant C and estimated t.
Widrow-Hoff procedure uses f(s) = s: W' = W + c * (d - f) * X, where d is the desired value and X is the input.
Generalized delta procedure uses f(s) = 1 / (1 + e^-s): W' = W + c * (d - f) * f * (1 - f) * X, where f * (1 - f) is the derivative of f.
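A minimal sketch of both update rules for a single unit (the weights, input, desired value d, and constant c are illustrative):

import math

def widrow_hoff_update(w, x, d, c=0.1):
    # f(s) = s: linear output, correction proportional to the error d - f
    f = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + c * (d - f) * xi for wi, xi in zip(w, x)]

def generalized_delta_update(w, x, d, c=0.1):
    # f(s) = 1 / (1 + e^-s): logistic output, derivative factor f * (1 - f)
    f = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return [wi + c * (d - f) * f * (1 - f) * xi for wi, xi in zip(w, x)]

w, x = [0.1, -0.2, 0.05], [1.0, 0.5, -1.0]
print(widrow_hoff_update(w, x, d=1.0))
print(generalized_delta_update(w, x, d=1.0))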
Input patterns and output patterns are vectors, so neural networks transform vectors (and so are like tensors). Computation can be serial or parallel (parallel processing).
Note: Units within a layer typically have no connections [Arbib, 2003].
Output units can represent one of the possible input patterns. For example, if the system has 26 output units to detect the 26 alphabet letters, for input pattern A, its output unit is on, and the other 25 output units are off.
Output unit values can represent one of the possible input patterns. For example, if the system has 1 output unit to detect the 26 alphabet letters, for input pattern A, output unit value is 1, and for input pattern Z, output unit value is 26.
The output pattern of the output layer can represent one of the possible input patterns. For example, if the system has 5 output units to detect the 26 alphabet letters, for input pattern A, the output pattern is binary number 00001 = decimal number 1, where 0 is off, 1 is on, and the code for A is 1, code for B is 2, and so on. For input pattern Z, the output pattern is binary number 11010 = decimal number 26.
Output-pattern values can represent one of the possible input patterns. For example, if the system has 2 output units, each holding one decimal digit, to detect the 26 alphabet letters, for input pattern A, output-pattern value is 01, and for input pattern Z, output-pattern value is 26.
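A minimal sketch of three of the letter encodings described above (the helper names are illustrative):

def one_hot(letter):
    # 26 output units: the unit for the detected letter is on, the rest off
    i = ord(letter) - ord('A')
    return [1 if j == i else 0 for j in range(26)]

def single_value(letter):
    # 1 output unit: its value is the letter's position, A = 1 ... Z = 26
    return ord(letter) - ord('A') + 1

def binary_code(letter):
    # 5 output units: on/off pattern is the position in binary, A = 00001
    n = ord(letter) - ord('A') + 1
    return [(n >> k) & 1 for k in range(4, -1, -1)]

print(one_hot('A').index(1), single_value('Z'), binary_code('Z'))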
For an analog system, the output pattern of the output layer can resemble an input pattern. For example, to detect the 26 alphabet letters, the system can use 25 input units and 25 output units. For input pattern A, the output pattern resembles A. Units can have continuous values for different intensities.
uses
Neural networks can model processes or representations.
Large neural networks can recognize more than one pattern and distinguish between them.
They can detect pattern unions and intersections. For example, they can recognize words.
Neural networks can recognize patterns similar to the target pattern, so neural networks can generalize to a category. For example, neural networks can recognize the letter T in various fonts.
Because neural networks have many units, if some units fail, pattern recognition can still work.
Neural networks can use many different functions, so neural networks can model most processes and representations. For example, Gabor functions can represent different neuron types, so neural networks can model brain processes and representations.
Neural networks can use two middle layers, in which recurrent pathways between first and second middle layer further refine processing.
vectors
Input patterns and output patterns are vectors (a, b, c, ...), so neural networks transform vectors and so are like tensors.
feedforward
Neural networks use feed-forward parallel processing.
types: non-adaptive
Hopfield nets do not learn and are non-adaptive neural nets, which cannot model statistics.
types: adaptive
Adaptive neural nets can learn and can model statistical inference and data analysis. Hebbian learning can model principal-component analysis. Probabilistic neural nets can model kernel-discriminant analysis. Hamming net uses minimum distance.
types: adaptive with unsupervised learning
Unsupervised learning uses only internal learning, with no corrections from human modelers. Adaptive Resonance Theory requires no noise to learn and cannot model statistics. Linear-function models, such as Learning Matrix, Sparse Distributed Associative Memory, Fuzzy Associative Memory, and Counterpropagation, are feedforward nets with no hidden layer. Bidirectional Associative Memory uses feedback. Kohonen self-organizing maps and reinforcement learning can model Markov decision processes.
types: adaptive with supervised learning
Supervised learning uses internal learning and corrections from human modelers. Adaline, Madaline, Artmap, Backpropagation, Backpropagation through time, Boltzmann Machine, Brain-State-in-a-Box, Fuzzy Cognitive Map, General Regression Neural Network, Learning Vector Quantization, and Probabilistic Neural Network use feedforward. Perceptrons require no noise to learn and cannot model statistics. Kohonen nets for adaptive vector quantization can model K-means cluster analysis.
brains compared to neural networks
Brains and neural networks use parallel processing, can use recurrent processing, have many units (and so still work if units fail), have input and output vectors, use tensor processing, can generalize, can distinguish, and can use set union and intersection.
Brains use many same-layer neuron cross-connections, but neural networks do not need them because they add no processing power.
The neural-network input layer consists of cortical neuron-array registers that receive from retina and thalamus. Weighting of inputs to the middle layer depends on visual-system knowledge of information about the reference beam. The middle layer is neuron-array registers that store perceptual patterns and make coherent waves. The output layer is perceptions in mental space.
Neurons are not the input-layer, middle-layer, or output-layer units. Units are abstract registers that combine and integrate neurons to represent (complex) numbers. Input layer, middle layer, and output layer are not physical arrays but programmed arrays (in visual and association cortex).
Neural-network processing is not neural processing. Processing uses algorithms that calculate with the numbers in registers. Layers, units, and processing are abstract, not directly physical.
Models {nerve net}| can simulate object-recognition neuron networks. Nerve nets assign weight to nodes. Node input intensity multiplies weight to give output. Vector sum of outputs over nodes is object representation.
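A minimal sketch of this nerve-net representation, assuming each node contributes a weighted component vector that the network sums (all values are illustrative):

def object_representation(intensities, node_weight_vectors):
    # each node multiplies its input intensity by its weights to give an output;
    # the vector sum of node outputs is the object representation
    outputs = [[intensity * w for w in weights]
               for intensity, weights in zip(intensities, node_weight_vectors)]
    return [sum(components) for components in zip(*outputs)]

# 3 nodes, each with a 2-component weight vector
print(object_representation([0.5, 1.0, 0.2],
                            [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]))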
Networks {Hopfield network} can use content-addressable memory, with weighted features and feedback.
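A minimal sketch of content-addressable recall in a Hopfield-style net with fixed Hebbian weights and feedback updates (the patterns and step count are illustrative):

def store(patterns):
    # weighted features: Hebbian weights fixed once the patterns are stored
    n = len(patterns[0])
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(weights, state, steps=5):
    # feedback: units repeatedly update toward the nearest stored pattern
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
w = store(stored)
print(recall(w, [1, 1, -1, -1, -1, -1]))   # noisy probe recalls the first pattern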
Unit sets {context layer, network} can receive a copy of the hidden layer and then feed it back to the hidden layer {simple recurrent network}.
Geometrical neural nets {tensor network theory} can make space-time coordinate transformations [Pellionisz and Llinas, 1982].